Three-Dimension Attention Mechanism and Self-Supervised Pretext Task for Augmenting Few-Shot Learning

Authors

Abstract

The main challenge of few-shot learning lies in the limited amount of labeled sample data. In addition, since image-level labels usually do not accurately describe the features of images, it is difficult for a model to achieve good generalization ability and robustness. This problem has not been well solved yet, and existing metric-based methods still have room for improvement. To address this issue, we propose a method based on a three-dimension attention mechanism and self-supervised learning. The attention module is used to extract more representative features by focusing on semantically informative regions through spatial and channel attention. The self-supervised component mainly adopts a proxy task of rotation transformation, which increases semantic information without requiring additional manual labeling, and combines this auxiliary training objective with the supervised loss function to improve performance. We conducted extensive experiments on four popular datasets and achieved state-of-the-art performance in both 5-shot and 1-shot scenarios. Experimental results show that our work provides a novel and remarkable approach.
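
The rotation pretext task described in the abstract is a standard self-supervised objective: each image is rotated by 0/90/180/270 degrees, and an auxiliary head predicts which rotation was applied, with that loss added to the supervised few-shot loss. A minimal sketch of this idea (the helper names, the auxiliary head, and the loss weight are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(images: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Make 4 rotated copies (0/90/180/270 degrees) of each image in a
    (N, C, H, W) batch, plus the rotation labels (0..3) to predict."""
    rotated = torch.cat([torch.rot90(images, k, dims=(2, 3)) for k in range(4)])
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels

class RotationHead(nn.Module):
    """Hypothetical auxiliary head that predicts the rotation class
    from backbone features."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 4)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.fc(feats)

def combined_loss(sup_loss: torch.Tensor,
                  rot_logits: torch.Tensor,
                  rot_labels: torch.Tensor,
                  weight: float = 0.5) -> torch.Tensor:
    """Supervised few-shot loss plus weighted rotation-prediction loss.
    The weight 0.5 is an assumed hyperparameter for illustration."""
    return sup_loss + weight * F.cross_entropy(rot_logits, rot_labels)
```

Because the rotation labels come for free from the transformation itself, this adds supervision signal without any extra manual annotation, which is the point the abstract makes.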


Related Articles

Meta-Learning for Semi-Supervised Few-Shot Classification

In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corres...


Semi-Supervised Few-Shot Learning with Prototypical Networks

We consider the problem of semi-supervised few-shot classification (when the few labeled samples are accompanied with unlabeled data) and show how to adapt the Prototypical Networks [10] to this problem. We first show that using larger and better regularized prototypical networks can improve the classification accuracy. We then show further improvements by making use of unlabeled data.


Few-shot Learning

Though deep neural networks have shown great success in the large data domain, they generally perform poorly on few-shot learning tasks, where a classifier has to quickly generalize after seeing very few examples from each class. The general belief is that gradient-based optimization in high capacity classifiers requires many iterative steps over many examples to perform well. Here, we propose ...


Prototypical Networks for Few-shot Learning

A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space...
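
The prototypical-networks idea summarized above, computing one prototype per class as the mean of its support embeddings and classifying queries by nearest prototype, can be sketched as follows (the feature extractor is omitted and the function names are illustrative, not the authors' code):

```python
import torch

def prototypes(support_feats: torch.Tensor,
               support_labels: torch.Tensor,
               n_classes: int) -> torch.Tensor:
    """One prototype per class: the mean embedding of that class's
    support examples."""
    return torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in range(n_classes)]
    )

def classify(query_feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each query to the class of its nearest prototype
    (squared Euclidean distance)."""
    dists = torch.cdist(query_feats, protos) ** 2
    return dists.argmin(dim=1)
```

Because each class is represented by a single prototype rather than a distribution over every support point (as in matching networks), the cost of classification is independent of the support-set size, which is the scaling advantage the snippet refers to.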


One-shot and few-shot learning of word embeddings

Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us. ...



Journal

Journal title: IEEE Access

Year: 2023

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2023.3285721